GTAE: Graph Transformer–Based Auto-Encoders for Linguistic-Constrained Text Style Transfer
Authors
Abstract
Non-parallel text style transfer has attracted increasing research interest in recent years. Despite successes in transferring style based on the encoder-decoder framework, current approaches still lack the ability to preserve the content, and even the logic, of original sentences, mainly due to a large unconstrained model space or overly simplified assumptions about the latent embedding space. Since language itself is an intelligent product of humans with certain grammars, and has a limited rule-based model space by its nature, relieving this problem requires reconciling the capacity of deep neural networks with the intrinsic constraints derived from human linguistic rules. To this end, we propose a method called Graph Transformer-based Auto-Encoder (GTAE), which models a sentence as a linguistic graph and performs feature extraction and style transfer at the graph level, to maximally retain the content and the linguistic structure of original sentences. Quantitative experiment results on three non-parallel text style transfer tasks show that our model outperforms state-of-the-art methods in content preservation, while achieving comparable performance on transfer accuracy and sentence naturalness.
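The core idea of operating "at the graph level" can be illustrated with a minimal numpy sketch of graph-masked self-attention: each token attends only to its neighbors in the sentence graph, so feature extraction respects the linguistic structure. This is an illustrative simplification, not the authors' implementation; all names (`graph_attention_layer`, the weight matrices, the toy graph) are hypothetical.

```python
import numpy as np

def graph_attention_layer(H, A, Wq, Wk, Wv):
    # Masked self-attention: token i may only attend to tokens j with
    # A[i, j] > 0, so updates follow the sentence graph rather than
    # full all-pairs self-attention.
    Q, K, V = H @ Wq, H @ Wk, H @ Wv
    scores = (Q @ K.T) / np.sqrt(K.shape[1])
    scores = np.where(A > 0, scores, -1e9)       # block non-edges
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 4, 8
H = rng.standard_normal((n, d))
# Chain-shaped sentence graph with self-loops; node 3 has only a self-loop.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 0],
              [0, 0, 0, 1]])
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = graph_attention_layer(H, A, Wq, Wk, Wv)
```

Because attention is masked by the adjacency matrix, a node connected only to itself simply reproduces its own value projection, which shows that information cannot leak across non-edges.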
Similar Resources
Variational Graph Auto-Encoders
Figure 1: Latent space of unsupervised VGAE model trained on Cora citation network dataset [1]. Grey lines denote citation links. Colors denote document class (not provided during training). Best viewed on screen. We introduce the variational graph autoencoder (VGAE), a framework for unsupervised learning on graph-structured data based on the variational auto-encoder (VAE) [2, 3]. This model ma...
Image Representation Learning Using Graph Regularized Auto-Encoders
It is an important task to learn a representation for images that has low dimension and preserves the valuable information of the original space. From the perspective of manifolds, this is done by using a series of locally invariant mappings. Inspired by the recent successes of deep architectures, we propose a locally invariant deep nonlinear mapping algorithm, called graph regularized auto-encoder (GAE...
Saturating Auto-Encoders
We introduce a simple new regularizer for auto-encoders whose hidden-unit activation functions contain at least one zero-gradient (saturated) region. This regularizer explicitly encourages activations in the saturated region(s) of the corresponding activation function. We call these Saturating Auto-Encoders (SATAE). We show that the saturation regularizer explicitly limits the SATAE’s ability t...
Rate-Distortion Auto-Encoders
A rekindled interest in auto-encoder algorithms has been spurred by recent work on deep learning. Current efforts have been directed towards effective training of auto-encoder architectures with a large number of coding units. Here, we propose a learning algorithm for auto-encoders based on a rate-distortion objective that minimizes the mutual information between the inputs and the outputs ...
Transforming Auto-Encoders
The artificial neural networks that are used to recognize shapes typically use one or more layers of learned feature detectors that produce scalar outputs. By contrast, the computer vision community uses complicated, hand-engineered features, like SIFT [6], that produce a whole vector of outputs including an explicit representation of the pose of the feature. We show how neural networks can be ...
Journal
Journal title: ACM Transactions on Intelligent Systems and Technology
Year: 2021
ISSN: 2157-6904, 2157-6912
DOI: https://doi.org/10.1145/3448733